
    Exploring Dynamic Compilation and Cross-Layer Object Management Policies for Managed Language Applications

    Recent years have witnessed the widespread adoption of managed programming languages that are designed to execute on virtual machines. Virtual machine architectures provide several powerful software engineering advantages over statically compiled binaries, such as portable program representations, additional safety guarantees, automatic memory and thread management, and dynamic program composition, which have largely driven their success. To support and facilitate the use of these features, virtual machines implement a number of services that adaptively manage and optimize application behavior during execution. Such runtime services often require tradeoffs between efficiency and effectiveness, and different policies can have major implications on the system's performance and energy requirements. In this work, we extensively explore policies for the two runtime services that are most important for achieving performance and energy efficiency: dynamic (or Just-In-Time (JIT)) compilation and memory management. First, we examine the properties of single-tier and multi-tier JIT compilation policies in order to find strategies that realize the best program performance for existing and future machines. Our analysis performs hundreds of experiments with different compiler aggressiveness and optimization levels to evaluate the performance impact of varying whether and when methods are compiled. We later investigate how to optimize program regions to maximize performance in JIT compilation environments. For this study, we conduct a thorough analysis of the behavior of optimization phases in our dynamic compiler, and construct a custom experimental framework to determine the performance limits of phase selection during dynamic compilation. Next, we explore innovative memory management strategies to improve energy efficiency in the memory subsystem. We propose and develop a novel cross-layer approach to memory management that integrates information and analysis in the VM with fine-grained management of memory resources in the operating system. Using custom as well as standard benchmark workloads, we perform a detailed evaluation that demonstrates the energy-saving potential of our approach. We implement and evaluate all of our studies using the industry-standard Oracle HotSpot Java Virtual Machine to ensure that our conclusions are supported by widely used, state-of-the-art runtime technology.
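    As a rough illustration of the kind of policy space explored above, the sketch below (not the dissertation's actual experimental framework) times the same Java workload under a few HotSpot compilation-policy settings; the benchmark class name and the particular flag combinations are placeholders chosen to approximate interpreter-only, single-tier, and multi-tier configurations.

```java
import java.util.List;
import java.util.Map;

// Sketch of a policy-exploration harness (not the dissertation's framework):
// run the same workload under several HotSpot JIT policies and compare wall time.
public class JitPolicySweep {
    // Each entry pairs a policy label with HotSpot flags that approximate it.
    static final Map<String, List<String>> POLICIES = Map.of(
        "interpreter-only",      List.of("-Xint"),
        "single-tier (C2 only)", List.of("-XX:-TieredCompilation"),
        "multi-tier (default)",  List.of("-XX:+TieredCompilation"),
        "eager single-tier",     List.of("-XX:-TieredCompilation", "-XX:CompileThreshold=100"));

    public static void main(String[] args) throws Exception {
        String mainClass = args.length > 0 ? args[0] : "MyBenchmark"; // placeholder workload
        for (var entry : POLICIES.entrySet()) {
            List<String> cmd = new java.util.ArrayList<>(List.of("java"));
            cmd.addAll(entry.getValue());
            cmd.add(mainClass);
            long start = System.nanoTime();
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            p.waitFor();
            System.out.printf("%-25s %.2f s%n", entry.getKey(),
                              (System.nanoTime() - start) / 1e9);
        }
    }
}
```

    Invoked as `java JitPolicySweep MyBenchmark`, the harness reports one wall-clock time per policy, which is the simplest form of the comparison the dissertation carries out at much larger scale.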

    Understanding Optimization Phase Interactions to Reduce the Phase Order Search Space

    Compiler optimization phase ordering is a longstanding problem, and is of particular relevance to the performance-oriented and cost-constrained domain of embedded systems applications. Optimization phases are known to interact with each other, enabling and disabling opportunities for successive phases. Therefore, varying the order of applying these phases often generates distinct output codes, with different speed, code-size, and power consumption characteristics. Most current approaches to address this issue focus on developing innovative methods to selectively evaluate the vast phase order search space to produce a good (but potentially suboptimal) representation for each program. In contrast, the goal of this thesis is to study and reduce the phase order search space by: (1) identifying common causes of optimization phase interactions across all phases, and then devising techniques to eliminate them, and (2) exploiting natural phase independence to prune the phase order search space. We observe that several phase interactions are caused by false register dependence during many optimization phases. We explore the potential of cleanup phases, such as register remapping and copy propagation, at reducing false dependences. We show that innovative implementation and application of these phases not only reduces the size of the phase order search space substantially, but can also improve the quality of code generated by optimizing compilers. We examine the effect of removing cleanup phases, such as dead assignment elimination, which should not interact with other compiler phases, from the phase order search space. Finally, we show that reorganization of the phase order search into a multi-staged approach employing sets of mutually independent optimizations can reduce the search space to a fraction of its original size without sacrificing performance.
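    The combinatorial payoff of the multi-staged reorganization described above can be illustrated with a small back-of-the-envelope sketch; the phase names and groupings below are hypothetical stand-ins, not the thesis's compiler, and simply show that once phases are grouped into stages of mutually independent optimizations, only the relative order of the stages still needs to be searched.

```java
import java.util.List;

// Illustrative sketch (not the thesis framework): grouping phases into stages
// of mutually independent optimizations collapses the number of orderings
// that must be evaluated from n! down to (number of stages)!.
public class PhaseOrderSpace {
    static long factorial(int n) {
        long f = 1;
        for (int i = 2; i <= n; i++) f *= i;
        return f;
    }

    public static void main(String[] args) {
        // Hypothetical phase names, grouped so that phases inside a stage
        // do not enable or disable one another.
        List<List<String>> stages = List.of(
            List.of("copy-propagation", "register-remapping"),
            List.of("loop-invariant-code-motion", "strength-reduction"),
            List.of("dead-assignment-elimination", "branch-chaining"));

        int n = stages.stream().mapToInt(List::size).sum();
        System.out.println("Exhaustive orderings: " + factorial(n));             // 6! = 720
        System.out.println("Staged orderings:     " + factorial(stages.size())); // 3! = 6
    }
}
```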

    Comet Assay Automation for DNA Testing

    Single cell gel electrophoresis, also known as the comet assay, is a process used to study the formation and repair of DNA damage. The comet assay is gaining popularity as industry and academic institutions begin to use the process more for single cell analysis. Some of the limiting factors to the comet assay's wider adoption are its low sample throughput, inherent inaccuracy, inconsistency due to human error, inaccurate temperature control, and laboratories' long sample workup procedures. In order to increase the effectiveness of the comet assay, it is necessary to achieve accurate temperature control and remove human intervention from the process while maintaining consistent results. We are addressing these needs by creating a device that automates the entire comet assay process up until scoring. Automation will remove the need for human intervention in the process, and will allow for consistent and accurate temperature control as well as the prevention of light contamination. All of this will lead to a more reliable outcome in the experiments, while allowing lab employees to be more efficient by eliminating the need for supervision and constant attendance during the long and tedious process. This automated approach is a significant advancement in comet assay experimentation.

    A Digital Library and Digital Preservation Architecture Based on Fedora

    This presentation was given at The Library in Bits and Bytes: Digital Library Symposium, held at the University of Maryland on 29 September 2005.

    Influence of Fuel Injection System and Engine-Timing Adjustments on Regulated Emissions from Four Biodiesel Fuels

    The use of biofuels for transportation has grown substantially in the past decade in response to federal mandates and increased concern about the use of petroleum fuels. As biofuels become more common, it is imperative to assess their influence on mobile source emissions of regulated and hazardous pollutants. This assessment cannot be done without first obtaining a basic understanding of how biofuels affect the relationship between fuel properties, engine design, and combustion conditions. Combustion studies were conducted on biodiesel fuels from four feedstocks (palm oil, soybean oil, canola oil, and coconut oil) with two injection systems, mechanical and electronic. For the electronic system, fuel injection timing was adjusted to compensate for physical changes caused by different fuels. The emissions of nitrogen oxides (NOx) and partial combustion products were compared across both engine injection systems. The analysis showed differences in NOx emissions based on hydrocarbon chain length and degree of fuel unsaturation, with little to no NOx increase compared with ultra-low sulfur diesel fuel for most conditions. Adjusting the fuel injection timing provided some improvement in biodiesel emissions for NOx and particulate matter, particularly at lower engine loads. The results indicated that the introduction of biodiesel and biodiesel blends could have widely dissimilar effects in different types of vehicle fleets, depending on typical engine design, age, and the feedstock used for biofuel production.

    Write-rationing garbage collection for hybrid memories

    Emerging Non-Volatile Memory (NVM) technologies offer high capacity and energy efficiency compared to DRAM, but suffer from limited write endurance and longer latencies. Prior work seeks the best of both technologies by combining DRAM and NVM in hybrid memories to attain low latency, high capacity, energy efficiency, and durability. Coarse-grained hardware and OS optimizations then spread writes out (wear-leveling) and place highly mutated pages in DRAM to extend NVM lifetimes. Unfortunately, even with these coarse-grained methods, popular Java applications exact impractical NVM lifetimes of 4 years or less. This paper shows how to make hybrid memories practical, without changing the programming model, by enhancing garbage collection in managed language runtimes. We find object write behaviors offer two opportunities: (1) 70% of writes occur to newly allocated objects, and (2) 2% of objects capture 81% of writes to mature objects. We introduce write-rationing garbage collectors that exploit these fine-grained behaviors. They extend NVM lifetimes by placing highly mutated objects in DRAM and read-mostly objects in NVM. We implement two such systems. (1) Kingsguard-nursery places new allocations in DRAM and survivors in NVM, reducing NVM writes by 5x versus NVM-only with wear-leveling. (2) Kingsguard-writers (KG-W) places nursery objects in DRAM and survivors in a DRAM observer space. It monitors all mature object writes and moves unwritten mature objects from DRAM to NVM. Because most mature objects are unwritten, KG-W exploits NVM capacity while increasing NVM lifetimes by 11x. It reduces the energy-delay product by 32% over DRAM-only and 29% over NVM-only. This work opens up new avenues for making hybrid memories practical.
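    The placement decision the paper attributes to Kingsguard-writers can be conveyed with a simplified, hypothetical model; the real collectors live inside the runtime's garbage collector and rely on write barriers and per-object metadata, so the classes below are illustrative stand-ins rather than the published implementation.

```java
// Simplified, hypothetical model of a write-rationing placement decision
// (the real collectors are implemented inside the runtime's GC, not in
// application-level Java like this sketch).
enum Space { DRAM_NURSERY, DRAM_OBSERVER, NVM_MATURE }

final class ObjectMeta {
    boolean survivedNursery;        // has the object been promoted out of the nursery?
    boolean writtenSincePromotion;  // tracked by a write barrier in the real system
}

final class WriteRationingPolicy {
    // Kingsguard-writers-style decision: new allocations go to DRAM, survivors
    // are first observed in DRAM, and only objects that stay unwritten while
    // under observation migrate to NVM.
    Space placementFor(ObjectMeta meta) {
        if (!meta.survivedNursery) {
            return Space.DRAM_NURSERY;   // most writes (~70%) hit newly allocated objects
        }
        if (meta.writtenSincePromotion) {
            return Space.DRAM_OBSERVER;  // keep highly mutated mature objects in DRAM
        }
        return Space.NVM_MATURE;         // read-mostly mature objects tolerate NVM
    }
}
```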

    Meeting reports: Research on Coupled Human and Natural Systems (CHANS): Approach, Challenges, and Strategies

    Understanding the complexity of human–nature interactions is central to the quest for both human well-being and global sustainability. To build an understanding of these interactions, scientists, planners, resource managers, policymakers, and communities increasingly are collaborating across wide-ranging disciplines and knowledge domains. Scientists and others are generating new integrated knowledge on top of their requisite specialized knowledge to understand complex systems in order to solve pressing environmental and social problems (e.g., Carpenter et al. 2009). One approach to this sort of integration, bringing together detailed knowledge of various disciplines (e.g., social, economic, biological, and geophysical), has become known as the study of Coupled Human and Natural Systems, or CHANS (Liu et al. 2007a, b). In 2007 a formal standing program in Dynamics of Coupled Natural and Human Systems was created by the U.S. National Science Foundation. Recently, the program supported the launch of an International Network of Research on Coupled Human and Natural Systems (CHANS-Net.org). A major kick-off event of the network was a symposium on Complexity in Human–Nature Interactions across Landscapes, which brought together leading CHANS scientists at the 2009 meeting of the U.S. Regional Association of the International Association for Landscape Ecology in Snowbird, Utah. The symposium highlighted original and innovative research emphasizing reciprocal interactions between human and natural systems at multiple spatial, temporal, and organizational scales. The presentations can be found at ‹http://chans-net.org/Symposium_2009.aspx›. The symposium was accompanied by a workshop on Challenges and Opportunities in CHANS Research. This article provides an overview of the CHANS approach, outlines the primary challenges facing the CHANS research community, and discusses potential strategies to meet these challenges, based upon the presentations and discussions among participants at the Snowbird meeting.

    Digital Preservation: Architecture and Technology for Trusted Digital Repositories

    Developing preservation processes for a trusted digital repository will require the integration of new methods, policies, standards, and technologies. Digital repositories should be able to preserve electronic materials for periods at least comparable to existing preservation methods. Modern computing technology in general is barely fifty years old and few of us have seen or used digital objects that are more than ten years old. While traditional preservation practices are comparatively well-developed, lack of experience and lack of consensus raise some questions about how we should proceed with digital-based preservation processes. Can we preserve a digital object for at least one hundred years? Can we answer questions such as “Is this object the digital original?” or “How old is this digital object?” What does it mean to be a trusted repository of digital materials? A basic premise of this article is that there are many technologies available today that will help us build trust in a digital preservation process and that these technologies can be readily integrated into an operational digital preservation framework. The published version of this article is available at: http://www.dlib.org/dlib/june05/jantz/06jantz.html. Peer reviewed.
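    One family of readily available technologies the article points to is fixity checking with cryptographic digests, which speaks directly to the question “Is this object the digital original?”; the sketch below is a minimal illustration with hypothetical file paths and checksum values, not the repository architecture the article describes.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

// Minimal fixity-check sketch: recompute a stored object's SHA-256 digest and
// compare it with the checksum recorded at ingest time. This illustrates one
// widely available trust-building technology, not the article's architecture.
public class FixityCheck {
    public static boolean verify(Path object, String recordedHexDigest) throws Exception {
        byte[] bytes = Files.readAllBytes(object);
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(bytes);
        return HexFormat.of().formatHex(digest).equalsIgnoreCase(recordedHexDigest);
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical path and recorded checksum, for illustration only.
        Path object = Path.of("repository/objects/item-0001.tiff");
        String recorded = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";
        System.out.println(verify(object, recorded) ? "fixity intact" : "fixity FAILED");
    }
}
```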

    Calibrating and Validating a Simulation Model to Identify Drivers of Urban Land Cover Change in the Baltimore, MD Metropolitan Region

    We build upon much of the accumulated knowledge of the widely used SLEUTH urban land change model and offer advances. First, we use SLEUTH’s exclusion/attraction layer to identify and test different urban land cover change drivers; second, we leverage SLEUTH’s self-modification capability to incorporate a demographic model; and third, we develop a validation procedure to quantify the influence of land cover change drivers and assess uncertainty. We found that, contrary to our a priori expectations, new development is not attracted to areas serviced by existing or planned water and sewer infrastructure. However, information about where population and employment growth is likely to occur did improve model performance. These findings point to the dominant role of centrifugal forces in post-industrial cities like Baltimore, MD. We successfully developed a demographic model that allowed us to constrain the SLEUTH model forecasts and address uncertainty related to the dynamic relationship between changes in population and employment and urban land use. Finally, we emphasize the importance of model validation. In this work, the validation procedure played a key role in rigorously assessing the impacts of different exclusion/attraction layers and in assessing uncertainty related to population and employment forecasts.

    Improving Both the Performance Benefits and Speed of Optimization Phase Sequence Searches

    The issues of compiler optimization phase ordering and selection present important challenges to compiler developers in several domains, and in particular to the speed, code size, power, and cost-constrained domain of embedded systems. Different sequences of optimization phases have been observed to provide the best performance for different applications. Compiler writers and embedded systems developers have recently addressed this problem by conducting iterative empirical searches using machine-learning-based heuristic algorithms in an attempt to find the phase sequences that are most effective for each application. Such searches are generally performed at the program level, although a few studies have been performed at the function level. The finer granularity of function-level searches has the potential to provide greater overall performance benefits, but only at the cost of slower searches caused by a greater number of performance evaluations that often require expensive program simulations. In this paper, we evaluate the performance benefits and search time increases of function-level approaches as compared to their program-level counterparts. We then present a novel search algorithm that conducts distinct function-level searches simultaneously, but requires only a single program simulation for evaluating the performance of potentially unique sequences for each function. Thus, our new hybrid search strategy provides the enhanced performance benefits of function-level searches with a search-time cost that is comparable to or less than that of program-level searches.
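    The central idea of the hybrid strategy, evaluating every function's current candidate sequence with a single program simulation per iteration, can be sketched as follows; the interfaces, random candidate generation, and per-function cycle counts below are illustrative assumptions rather than the paper's actual heuristic search.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Sketch of the hybrid search idea (not the paper's implementation): each
// function keeps its own best phase sequence, but each iteration compiles all
// functions with their current candidates and performs a SINGLE program
// simulation, whose per-function cycle counts drive every search at once.
public class HybridPhaseSearch {
    /** Hypothetical interface: simulate once, report cycles spent per function. */
    interface Simulator { Map<String, Long> runProgram(Map<String, int[]> seqPerFunction); }

    public static Map<String, int[]> search(String[] functions, int numPhases,
                                            int iterations, Simulator sim) {
        Random rng = new Random(42);
        Map<String, int[]> best = new HashMap<>();
        Map<String, Long> bestCycles = new HashMap<>();
        for (String f : functions) {
            best.put(f, rng.ints(numPhases, 0, numPhases).toArray());
            bestCycles.put(f, Long.MAX_VALUE);
        }
        for (int iter = 0; iter < iterations; iter++) {
            // Each function proposes a new candidate sequence independently...
            Map<String, int[]> candidate = new HashMap<>();
            for (String f : functions) {
                candidate.put(f, rng.ints(numPhases, 0, numPhases).toArray());
            }
            // ...but one program simulation evaluates all of them together.
            Map<String, Long> cycles = sim.runProgram(candidate);
            for (String f : functions) {
                if (cycles.get(f) < bestCycles.get(f)) {   // per-function credit assignment
                    bestCycles.put(f, cycles.get(f));
                    best.put(f, candidate.get(f));
                }
            }
        }
        return best;
    }
}
```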